The Hollywood Sign is Not on Fire: Deepfakes and Misinformation Spread in L.A. Wildfires

Amid the devastation of the Los Angeles County wildfires, which have scorched an area twice the size of Manhattan, McAfee threat researchers have identified and verified a rise in AI-generated deepfakes and misinformation, including startling but false images of the Hollywood sign engulfed in flames.

Debunking the Myth: Hollywood Sign Safe Amid Wildfire Rumors on Social Media

Social media and local broadcast news have been flooded with deceptive images purporting to show the Hollywood sign engulfed in flames, with many people alleging that the iconic landmark is “surrounded by fire.”

Fact check: The Hollywood sign is still standing and intact, confirmed Jeff Zarrinnam, chairman of the Hollywood Sign Trust.

Zarrinnam clarified to Reuters that the landmark remains unharmed and suggested that the misleading posts circulating online were likely created using AI-generated images and videos. According to the California Department of Forestry and Fire Protection, the Hollywood sign, perched on Mount Lee in the Santa Monica Mountains, is far from any current evacuation zones. To date, there have been no reports of fires near the Hollywood sign.

McAfee researchers have examined several of the images being shared on social media and can confirm they are deepfakes. McAfee’s deepfake detection technology flags the image of the Hollywood Hills as AI-generated, with the fire serving as a key factor in its analysis. Further investigation traced the image back to Gemini, an AI-based image generation platform. This finding underscores the increasing sophistication of fake image synthesis and the continuous advancement of McAfee’s deepfake detection tools.

McAfee CTO Steve Grobman states: “AI tools have supercharged the spread of disinformation and misinformation, enabling false content—like recent fake images of the Hollywood sign engulfed in flames—to circulate at unprecedented speed. This makes it critical for social media users to keep their guard up, approach viral posts with skepticism, and verify sources to distinguish fact from fiction.”

Figure 1. McAfee’s AI heat maps identify areas of AI image manipulation
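This post doesn’t describe the detector’s internals, but one classic forensic technique that produces similar heat maps is Error Level Analysis (ELA): re-save an image at a known JPEG quality and visualize where pixels respond differently to compression. The short Python sketch below is purely illustrative; it is not McAfee’s method, and the filename is a placeholder.

```python
# Illustrative only: a simple Error Level Analysis (ELA) heat map.
# This is NOT McAfee's detection method; it is a classic forensic
# technique that highlights regions compressing differently from
# the rest of a JPEG, which can hint at manipulation.
import io

import numpy as np
from PIL import Image, ImageChops


def ela_heatmap(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image at a known JPEG quality and map the
    per-pixel difference to a visible 'heat' image."""
    original = Image.open(path).convert("RGB")

    # Re-compress to a temporary in-memory JPEG.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)

    # Absolute difference between the original and re-saved copies.
    diff = ImageChops.difference(original, resaved)

    # Scale the differences so subtle artifacts become visible.
    arr = np.asarray(diff, dtype=np.float32)
    scale = 255.0 / max(float(arr.max()), 1.0)
    heat = (arr * scale).clip(0, 255).astype(np.uint8)
    return Image.fromarray(heat)


if __name__ == "__main__":
    # "suspect_image.jpg" is a placeholder filename.
    ela_heatmap("suspect_image.jpg").save("ela_heatmap.png")
```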

When Social Media Fans the Flames of Misinformation 

AI-generated still images are incredibly easy to produce. In just minutes, we were able to create a convincing image of the Hollywood sign on fire, for free, using an AI image-generating Android app. There are many such apps to choose from. Some do filter out violent and other objectionable content, but an image like the Hollywood sign on fire falls outside those guardrails. Additionally, the business model of many of these apps includes free trial credits, making it quick and easy to create and share. AI image generation is a widely available and easily accessible tool used in many misinformation campaigns.

Exploring feeds on X, Instagram, TikTok, and YouTube, we were quickly able to find many examples of misinformation being spread.

Figure 2. Misinformation examples on Instagram and X.

Upon closer inspection, a number of the images being shared on social media still carry a visible AI-generation watermark, yet they circulate widely without scrutiny. Unfortunately, most people don’t know what to look for, allowing these fake images to rack up millions of views in a short period of time. As our evidence demonstrates, this lack of awareness contributes to the rapid spread of misinformation, amplifying the impact of these deceptive visuals.

Figure 3. The Grok watermark is clearly visible in the image above.
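Visible watermarks like the one above still have to be spotted by eye, but some generated files also carry provenance hints in their metadata. The sketch below simply dumps whatever EXIF tags and text chunks an image file contains; whether any AI-related fields survive depends on the generating tool and the platform (most social networks strip metadata on upload), so treat it as a best-effort check rather than proof either way. The filename is a placeholder.

```python
# Illustrative sketch: dump an image file's metadata to look for
# provenance hints. Absence of AI-related fields proves nothing,
# since most platforms strip metadata on upload.
from PIL import Image
from PIL.ExifTags import TAGS


def dump_image_metadata(path: str) -> dict:
    """Collect EXIF tags and embedded text chunks from an image file."""
    img = Image.open(path)
    metadata = {}

    # PNG text chunks and other format-specific info land in img.info.
    for key, value in img.info.items():
        if isinstance(value, (str, bytes)):
            metadata[key] = value

    # EXIF tags (mostly relevant for JPEGs).
    for tag_id, value in img.getexif().items():
        metadata[TAGS.get(tag_id, tag_id)] = value

    return metadata


if __name__ == "__main__":
    # "viral_post.jpg" is a placeholder filename.
    for key, value in dump_image_metadata("viral_post.jpg").items():
        print(f"{key}: {value!r}")
```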

How to Identify a Deepfake

There are several straightforward steps that you can take to spot a fake. Even as AI tools create increasingly convincing deepfakes, a consistent truth applies.

Slow down.

Malicious deepfakes share something in common. They play on emotions. And they play to biases as well. By stirring up excitement about a “guaranteed” investment or outrage at the apparent words of a politician or public figure, deepfakes cloud judgment. That’s by design. It makes deepfakes more difficult to spot because people want to believe them on some level. With that, slow down. Especially if you see something that riles you up. This offers one of the best ways to spot a fake. From there, the next step is to validate what you’ve seen or heard.

Consider who did the posting.

Because the content got posted on social media, you can see who posted it. If it’s a friend, did they repost it? Who was the original poster? Could it be a bot or a bogus account? How long has the account been active? What kind of other posts have popped up on it? If an organization posted it, look it up online. Does it seem reputable? This bit of detective work might not provide a definitive answer, but it can let you know if something seems fishy.

Seek another source.

Whether they aim to spread disinformation, commit fraud, or rile up emotions, malicious deepfakes try to pass themselves off as legitimate. Consider a video clip that looks like it got recorded at a press conference. The figure behind the podium says some outrageous things. Did that really happen? Consult other established and respected sources. If they’re not reporting on it, you’re likely dealing with a deepfake.

Moreover, they might report that what you’re looking at is a deepfake that’s making the rounds on the internet. A technique called SIFT can help root out a fake. It stands for: Stop, Investigate the source, Find better coverage, and Trace the media to the original context. With the SIFT method, you can indeed slow down and determine what’s real.

Have a professional fact-checker do the work for you.

Debunking fake news takes time and effort, often a bit of digging and research too. Professional fact-checkers at news and media organizations do this work daily, and their findings are posted for all to see, providing a quick way to get answers. Some fact-checking groups include:

· Politifact.com

· Snopes.com

· FactCheck.org

· Reuters Fact Check

· AP Fact Check

What are typical signs of a deepfake?

This gets to the tricky bit. The AI tools for creating deepfakes continually improve, and it’s getting tougher and tougher to spot the signs of a deepfake. The advice we give here now might not broadly apply later. Even so, bad actors still use older and less sophisticated tools, which can leave telltale signs.

How to spot AI-generated text.

Look for typos. If you spot some, a human likely did the writing. AI generally writes clean text when it comes to spelling and grammar.

Look for repetition. AI chatbots get trained on volumes and volumes of text, and they often latch onto pet terms and phrases picked up during that training. Stylistically, AI chatbots often overlook that repetition (a rough check for this is sketched below).

Look for style (or lack thereof). Today’s chatbots are no Ernest Hemingway, Mark Twain, or Vladimir Nabokov. They lack style. The text they generate often feels canned and flat. Moreover, they tend to spit out statements with little consideration for how they flow together.
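As a rough illustration of the repetition point above, here is a small Python sketch that counts three-word phrases appearing more than once. Heavy phrase reuse is only a weak signal, not proof of machine-generated text, and the sample string is invented for the example.

```python
# Illustrative sketch: a crude repetition check. It counts 3-word
# phrases that appear more than once; unusually heavy phrase reuse
# is one (weak) hint of machine-generated text, not proof.
import re
from collections import Counter


def repeated_phrases(text: str, n: int = 3, min_count: int = 2):
    """Return n-word phrases that repeat, most frequent first."""
    words = re.findall(r"[a-z']+", text.lower())
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)
    return [(phrase, c) for phrase, c in counts.most_common() if c >= min_count]


if __name__ == "__main__":
    # Invented sample text for demonstration purposes only.
    sample = (
        "Our solution delivers real value. Our solution delivers "
        "seamless results, and our solution delivers peace of mind."
    )
    for phrase, count in repeated_phrases(sample):
        print(f"{count}x  {phrase}")
```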

Zoom in. A close look at deepfake photos often reveals inconsistencies and flat-out oddities.

Be skeptical. Always.

With AI tools improving so quickly, we can no longer take things at face value. Malicious deepfakes look to deceive, defraud, and disinform. And the people who create them hope you’ll consume their content in one unthinking gulp. Scrutiny is key today, and fact-checking is a must, particularly as deepfakes look sharper and sharper as the technology evolves.

Plenty of deepfakes can lure you into sketchy corners of the internet, places where malware and phishing sites take root. Consider using comprehensive online protection software like McAfee+ and McAfee Deepfake Detector to keep safe. In addition to several features that protect your devices, privacy, and identity, they can warn you of unsafe sites too.
